
    Tactile Mesh Saliency: a brief synopsis

    Get PDF
    This work has previously been published [LDS 16], and this extended abstract provides a synopsis for further discussion at the UK CGVC 2016 conference. We introduce the concept of tactile mesh saliency, where tactile salient points on a virtual mesh are those that a human is more likely to grasp, press, or touch if the mesh were a real-world object. We solve the problem of taking as input a 3D mesh and computing the tactile saliency of every mesh vertex. The key to solving this problem is a new formulation that combines deep learning and learning-to-rank methods to compute a tactile saliency measure. Finally, we discuss possibilities for future work.
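
    The core formulation can be pictured with a small, self-contained sketch. The snippet below is not the authors' code: the descriptor dimension, network size, and training data are all illustrative assumptions. It shows how a RankNet-style pairwise loss can train a network to score vertices so that "a is more salient than b" judgments are respected.

```python
import torch
import torch.nn as nn

# Hypothetical scorer: maps a per-vertex geometric descriptor to one
# scalar tactile saliency value. Sizes are illustrative assumptions.
class SaliencyScorer(nn.Module):
    def __init__(self, descriptor_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(descriptor_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, x):                   # x: (batch, descriptor_dim)
        return self.mlp(x).squeeze(-1)      # one score per vertex

scorer = SaliencyScorer()
optimizer = torch.optim.Adam(scorer.parameters(), lr=1e-3)

# Toy pairs: desc_a[i] was ranked more tactile-salient than desc_b[i].
desc_a, desc_b = torch.randn(32, 64), torch.randn(32, 64)

for _ in range(100):
    optimizer.zero_grad()
    # RankNet-style: P(a ranked above b) = sigmoid(score_a - score_b).
    p_ab = torch.sigmoid(scorer(desc_a) - scorer(desc_b))
    loss = -torch.log(p_ab + 1e-8).mean()   # every label here says "a wins"
    loss.backward()
    optimizer.step()
```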

    Learning a human-perceived softness measure of virtual 3D objects

    Get PDF
    We introduce the problem of computing a human-perceived softness measure for virtual 3D objects. As the virtual objects do not exist in the real world, we do not directly consider their physical properties but instead compute the human-perceived softness of their geometric shapes. We collect crowdsourced data in which humans rank their perception of the softness of vertex pairs on virtual 3D models. We then compute shape descriptors and use a learning-to-rank approach to learn a softness measure mapping any vertex to a softness value. Finally, we demonstrate our framework with a variety of 3D shapes.
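
    As a rough illustration of the data side of such a pipeline, the sketch below turns crowdsourced "u feels softer than v" votes into descriptor pairs for a learning-to-rank model. The vote format, the majority filter, and the random descriptors are assumptions for the example, not the paper's actual data schema.

```python
from collections import Counter

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical per-vertex shape descriptors (32-D, values made up).
descriptors = {v: rng.standard_normal(32) for v in range(100)}

# Each vote is (softer_vertex, harder_vertex) from one crowd worker.
votes = [(3, 7), (3, 7), (7, 3), (12, 5), (12, 5), (12, 5)]

# Keep only pairs where a clear majority agrees on the direction.
tally = Counter(votes)
pairs = []
for (u, v), n_uv in tally.items():
    if n_uv > tally.get((v, u), 0):         # majority: u softer than v
        pairs.append((descriptors[u], descriptors[v]))

X_soft = np.stack([p[0] for p in pairs])    # descriptors ranked softer
X_hard = np.stack([p[1] for p in pairs])    # descriptors ranked harder
print(X_soft.shape, X_hard.shape)           # ready for a pairwise ranking loss
```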

    Sketch2Stress: Sketching with Structural Stress Awareness

    Full text link
    In the process of product design and digital fabrication, structural analysis of a designed prototype is a fundamental and essential step. However, such a step is usually invisible or inaccessible to designers at the early sketching phase, which limits their ability to consider a shape's physical properties and structural soundness. To bridge this gap, we introduce a novel approach, Sketch2Stress, that allows users to perform structural analysis of desired objects at the sketching stage. The method takes as input a 2D freehand sketch and one or more locations of user-assigned external forces. With a specially designed two-branch generative-adversarial framework, it automatically predicts a normal map and a corresponding structural stress map distributed over the underlying user-sketched object. In this way, our method empowers designers to easily examine the stress sustained everywhere on their sketched object and identify potential problematic regions. Furthermore, combined with the predicted normal map, users can conduct region-wise structural analysis efficiently by aggregating the stress effects of multiple forces in the same direction. Finally, we demonstrate the effectiveness and practicality of our system with extensive experiments and user studies.
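
    The two-branch design can be pictured with a minimal architectural sketch, assuming the force locations are encoded as an extra input channel; channel counts and depths below are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

class TwoBranchGenerator(nn.Module):
    """Shared encoder over [sketch, force map]; one decoder per output."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        def decoder(out_ch):
            return nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, out_ch, 4, stride=2, padding=1),
            )
        self.normal_head = decoder(3)   # per-pixel surface normal
        self.stress_head = decoder(1)   # per-pixel scalar stress

    def forward(self, sketch, force_map):
        z = self.encoder(torch.cat([sketch, force_map], dim=1))
        return self.normal_head(z), self.stress_head(z)

g = TwoBranchGenerator()
sketch = torch.rand(1, 1, 256, 256)     # freehand line drawing
force = torch.zeros(1, 1, 256, 256)
force[0, 0, 100, 128] = 1.0             # one user-assigned force location
normals, stress = g(sketch, force)      # (1,3,256,256), (1,1,256,256)
```

    If stress responses are treated as approximately superposable, the region-wise analysis mentioned above amounts to summing the predicted stress maps of several same-direction forces.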

    Sketch Beautification: Learning Part Beautification and Structure Refinement for Sketches of Man-made Objects

    Full text link
    We present a novel freehand sketch beautification method, which takes as input a freely drawn sketch of a man-made object and automatically beautifies it both geometrically and structurally. Beautifying a sketch is challenging because of its highly abstract and widely varying drawing style. Existing methods are usually confined to the distribution of their limited training samples and thus cannot beautify freely drawn sketches with rich variations. To address this challenge, we adopt a divide-and-combine strategy: we first parse an input sketch into semantic components, beautify individual components with a learned part beautification module based on part-level implicit manifolds, and then reassemble the beautified components through a structure beautification module. With this strategy, our method can go beyond the training samples and handle novel freehand sketches. We demonstrate the effectiveness of our system with extensive experiments and a perceptual study.
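
    The divide-and-combine strategy reads naturally as a three-stage pipeline. The skeleton below only fixes the control flow; every function is a stub standing in for a learned module, and all names and signatures are illustrative assumptions.

```python
def parse_into_parts(strokes):
    """Stub for semantic parsing (e.g. a chair into seat, back, legs)."""
    return {"body": strokes}                # assumption: one component

def beautify_part(part_strokes):
    """Stub for the part beautification module: project the part onto a
    learned part-level implicit manifold and decode a cleaned version."""
    return part_strokes                     # identity placeholder

def refine_structure(parts):
    """Stub for the structure beautification module: restore global
    relations (alignment, symmetry, connectivity) between parts."""
    return [s for strokes in parts.values() for s in strokes]

def beautify_sketch(strokes):
    parts = parse_into_parts(strokes)
    beautified = {name: beautify_part(p) for name, p in parts.items()}
    return refine_structure(beautified)

print(beautify_sketch([[(0, 0), (1, 1)]]))  # toy stroke as a point list
```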

    Discovery and Analysis of Activity Pattern Cooccurrences in Business Process Models

    Get PDF
    Research on workflow activity patterns has recently emerged in order to increase the reuse of recurring business functions (e.g., notification, approval, and decision). One important aspect is to identify pattern co-occurrences and to use this information to create modeling recommendations about which activity patterns are best combined with one already in use. Activity patterns, as well as their co-occurrences, can be identified through the analysis of process models rather than event logs. To this end, this paper proposes a method for discovering and analyzing activity pattern co-occurrences in business process models. Our results are used to develop a BPM tool that fosters the modeling of business processes based on the reuse of activity patterns. The tool includes an inference engine which considers pattern co-occurrences to give design-time recommendations for pattern usage.
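
    A minimal sketch of the co-occurrence analysis, with made-up pattern names and models: count how often two patterns appear in the same process model, then rank recommendations by the conditional frequency of seeing one pattern given that another is already in use.

```python
from collections import Counter
from itertools import combinations

# Toy repository: each process model is the set of patterns it uses.
models = [
    {"approval", "notification", "decision"},
    {"approval", "notification"},
    {"approval", "decision"},
]

pattern_counts = Counter()
pair_counts = Counter()
for patterns in models:
    pattern_counts.update(patterns)
    pair_counts.update(frozenset(p) for p in combinations(sorted(patterns), 2))

def recommend(used):
    """Rank other patterns by confidence P(other | used)."""
    scores = {}
    for pair, n in pair_counts.items():
        if used in pair:
            (other,) = pair - {used}
            scores[other] = n / pattern_counts[used]
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(recommend("approval"))   # e.g. both 'notification' and 'decision' at 2/3
```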

    Tactile mesh saliency

    Get PDF
    While the concept of visual saliency has been previously explored in the areas of mesh and image processing, saliency detection also applies to other sensory stimuli. In this paper, we explore the problem of tactile mesh saliency, where we define salient points on a virtual mesh as those that a human is more likely to grasp, press, or touch if the mesh were a real-world object. We solve the problem of taking as input a 3D mesh and computing the relative tactile saliency of every mesh vertex. Since it is difficult to manually define a tactile saliency measure, we introduce a crowdsourcing and learning framework. It is typically easier for humans to provide relative rankings of saliency between vertices than absolute values. We thereby collect crowdsourced data of such relative rankings and take a learning-to-rank approach. We develop a new formulation to combine deep learning and learning-to-rank methods to compute a tactile saliency measure. We demonstrate our framework with a variety of 3D meshes and various applications, including material suggestion for rendering and fabrication.
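
    As a toy illustration of the material-suggestion application mentioned at the end (not the paper's actual pipeline), once each vertex carries a normalized saliency value, frequently grasped or pressed regions can be mapped to tactile-friendly materials; the thresholds and material names below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
saliency = rng.random(10)      # stand-in for learned per-vertex saliency in [0, 1]

def suggest_material(s):
    if s > 0.7:
        return "rubber"        # likely grasped/pressed: grippy and soft
    if s > 0.4:
        return "plastic"       # moderate contact
    return "metal"             # rarely touched: free choice for appearance

print([suggest_material(s) for s in saliency])
```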

    Learning Perceptual Aesthetics of 3D Shapes from Multiple Views

    Get PDF
    The quantification of 3D shape aesthetics has so far focused on specific shape features and manually defined criteria, such as curvature and the rule of thirds, respectively. In this paper, we build a model of 3D shape aesthetics directly from human aesthetics preference data and show that it is well aligned with human perception of aesthetics. To build this model, we first crowdsource a large number of human aesthetics preferences by showing shapes in pairs in an online study, and then use this data to train a multi-view deep neural network architecture that learns a measure of 3D shape aesthetics. In comparison to previous approaches, we do not use any pre-defined notions of aesthetics to build our model. Our algorithmically computed measure of shape aesthetics is beneficial to a range of applications in graphics, such as search, visualization, and scene composition.
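
    The multi-view idea can be sketched as a shared per-view CNN whose features are pooled across views into a single aesthetics score; the view count, layer sizes, and pooling choice below are assumptions, and the final line shows how pairwise preference data would drive training.

```python
import torch
import torch.nn as nn

class MultiViewAesthetics(nn.Module):
    def __init__(self):
        super().__init__()
        self.view_cnn = nn.Sequential(          # shared across all views
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, views):                   # (batch, n_views, 1, H, W)
        b, v = views.shape[:2]
        feats = self.view_cnn(views.flatten(0, 1))        # (b*v, 32)
        pooled = feats.view(b, v, -1).max(dim=1).values   # pool over views
        return self.head(pooled).squeeze(-1)              # one score per shape

model = MultiViewAesthetics()
shape_a = torch.rand(4, 12, 1, 64, 64)   # 12 rendered views per shape
shape_b = torch.rand(4, 12, 1, 64, 64)
# Crowd preference said shape_a is more aesthetic than shape_b:
loss = -torch.log(torch.sigmoid(model(shape_a) - model(shape_b)) + 1e-8).mean()
```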

    Improving style similarity metrics of 3D shapes

    Get PDF
    The idea of style similarity metrics has recently been developed for various media types, such as 2D clip art and 3D shapes. We explore this style metric problem and improve existing style similarity metrics of 3D shapes in four novel ways. First, we consider the color and texture of 3D shapes, important properties that have not been previously considered. Second, we explore the effect of clustering a dataset of 3D models by comparing style metrics learned for a single object type with style metrics that combine clusters of object types. Third, we explore the idea of user-guided learning for this problem. Fourth, we introduce an iterative approach that can learn a metric from a general set of 3D models. We demonstrate these contributions with various classes of 3D shapes and with applications such as style-based similarity search and scene composition.
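
    One simple way to realize such a metric (a sketch under assumed features, not the paper's exact formulation) is to learn non-negative per-dimension weights over concatenated geometry, color, and texture features from relative comparisons of the form "a is closer in style to b than to c":

```python
import torch

dim = 48                                       # assumed combined feature size
log_w = torch.zeros(dim, requires_grad=True)   # exp keeps weights positive
opt = torch.optim.Adam([log_w], lr=1e-2)

def style_dist(x, y):
    return (torch.exp(log_w) * (x - y) ** 2).sum(-1)

# Toy triplets: (anchor, same-style, different-style) feature vectors.
a, pos, neg = torch.randn(64, dim), torch.randn(64, dim), torch.randn(64, dim)

for _ in range(200):
    opt.zero_grad()
    # Hinge loss: same-style pairs should be closer by a margin of 1.
    loss = torch.relu(style_dist(a, pos) - style_dist(a, neg) + 1.0).mean()
    loss.backward()
    opt.step()
```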

    Schelling Meshes

    Get PDF

    Multi-View Sketch Correspondence

    Get PDF
    In this paper, we study the problem of multi-view sketch correspondence: we take as input multiple freehand sketches of the same object from different views and predict as output the semantic correspondence among the sketches. This problem is challenging since the visual features of corresponding points at different views can be very different. To this end, we take a deep learning approach and learn a novel local sketch descriptor from data. We contribute a training dataset by generating pixel-level correspondences for multi-view line drawings synthesized from 3D shapes. To handle the sparsity and ambiguity of sketches, we design a novel multi-branch neural network that integrates a patch-based representation and a multi-scale strategy to learn pixel-level correspondences among multi-view sketches. We demonstrate the effectiveness of our proposed approach with extensive experiments on hand-drawn sketches and multi-view line drawings rendered from multiple 3D shape datasets.
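
    The patch-based, multi-scale descriptor idea can be sketched as follows (branch depths, crop sizes, and the fusion layer are assumptions): crops of several sizes around a sketch pixel are encoded by per-scale branches and fused into one normalized descriptor, so candidate correspondences across views can be scored by descriptor similarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleSketchDescriptor(nn.Module):
    def __init__(self, scales=(32, 64, 128), out_dim=128):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            for _ in scales
        )
        self.fuse = nn.Linear(32 * len(scales), out_dim)

    def forward(self, patches):   # one (batch, 1, s, s) crop per scale
        feats = [branch(p) for branch, p in zip(self.branches, patches)]
        return F.normalize(self.fuse(torch.cat(feats, dim=1)), dim=-1)

net = MultiScaleSketchDescriptor()
# Toy multi-scale crops around one pixel in each of two sketch views.
view1 = [torch.rand(1, 1, s, s) for s in (32, 64, 128)]
view2 = [torch.rand(1, 1, s, s) for s in (32, 64, 128)]
d1, d2 = net(view1), net(view2)
similarity = (d1 * d2).sum(-1)   # cosine similarity; high for matches
```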